Although user satisfaction is widely used by researchers and practitioners to evaluate information system success, important issues related to its meaning and measurement across population subgroups have not been adequately resolved. To be most useful in decision-making, instruments like end-user computing satisfaction (EUCS), which are designed to evaluate system success, should be robust. That is, they should enable comparisons by providing equivalent measurement across diverse samples that represent the variety of conditions or population subgroups present in organizations. Using a sample of 1,166 responses, the EUCS instrument is tested for measurement invariance across four dimensions: respondent positions, types of application, hardware platforms, and modes of development. While the results suggest that the meaning of user satisfaction is context sensitive and differs across population subgroups, the 12 measurement items are invariant across all four dimensions. The 12-item summed scale enables researchers or practitioners to compare EUCS scores across the instrument's originally intended universe of applicability.
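The practical payoff of item invariance is that a simple summed score becomes directly comparable across subgroups. A minimal sketch of that idea, assuming the abstract's 12-item scale; the scoring convention (1-5 Likert responses) and the subgroup data below are hypothetical illustrations, not values from the study:

```python
from statistics import mean

def eucs_score(responses):
    """Summed EUCS score for one respondent.

    responses: 12 item scores (hypothetically on a 1-5 Likert scale),
    so the summed scale ranges from 12 to 60. Because the 12 items are
    invariant across subgroups, these sums can be compared between groups.
    """
    assert len(responses) == 12
    return sum(responses)

# Hypothetical subgroup samples (e.g., two respondent positions).
managers = [[4] * 12, [5] * 12]
clerical = [[3] * 12, [4] * 12]

mgr_mean = mean(eucs_score(r) for r in managers)
clr_mean = mean(eucs_score(r) for r in clerical)
```

Without evidence of invariance, a gap between `mgr_mean` and `clr_mean` could reflect differences in what the items mean to each group rather than differences in satisfaction; the invariance result is what licenses the comparison.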
Efforts to develop measures of Internet commerce success have been hampered by (1) the rapid development and use of Internet technologies and (2) the lack of conceptual bases necessary to develop success measures. In a recent study, Keeney (1999) proposed two sets of variables, labeled means objectives and fundamental objectives, that influence Internet shopping. Means objectives, he argues, help businesses achieve what is important for their customers: fundamental objectives. Based on Keeney's work, this paper describes the development of two instruments that together measure the factors that influence Internet commerce success. One instrument measures the means objectives that influence online purchase (e.g., Internet vendor trust), and the other measures the fundamental objectives that customers perceive to be important for Internet commerce (e.g., Internet product value). In Phase 1 of the instrument development process, we generated 125 items for means and fundamental objectives and, using a sample of 199 responses from individuals with Internet shopping experience, examined these constructs for reliability and validity. The Phase 1 results suggested a 4-factor, 21-item instrument to measure means objectives and a 4-factor, 17-item instrument to measure fundamental objectives. In Phase 2, we gathered a sample of 421 responses to further explore the two instruments. With minor modifications, the Phase 2 data support the two models. The Phase 2 results suggest a 5-factor, 21-item instrument that measures means objectives in terms of Internet product choice, online payment, Internet vendor trust, shopping travel, and Internet shipping errors. The results also suggest a 4-factor, 16-item instrument that measures fundamental objectives in terms of Internet shopping convenience, Internet ecology, Internet customer relation, and Internet product value.
Evidence of reliability and of discriminant, construct, and content validity is presented for the hypothesized measurement models. The paper concludes with a discussion of the usefulness of these measures and ideas for future research.
This article presents a confirmatory factor analysis of the end-user computing satisfaction index. User satisfaction is often thought to be the most important measure of success for information systems. This study uses LISREL VII to test hypotheses against sample data. The sample data were collected from 409 computer end-users in 18 different organizations using 139 computer applications, including accounts payable, financial planning, inventory, and computer-aided design. A variety of alternative factor models were statistically tested against the data.
The increasing need for integration and the rapid growth of online systems have made telecommunications a vital part of management information systems (MIS). In search of competitive advantage, organizations make significant investments in telecommunications. Telecommunications management is becoming a top priority of information systems executives. The MIS literature suggests that steering committees are an effective means of managing information systems. However, there is no information on how steering committees affect the management of the telecommunications function. Drawing on organizational theory and the MIS literature, a framework is presented that relates firm size and telecommunications steering committees to planning practices and organizational recognition and support. Using a survey of 137 organizations, this framework is examined. The results of this exploratory research suggest that use of a telecommunications steering committee is associated with firm size, planning practices, and top management recognition and support. As firms grow, they tend to use steering committees more frequently for interunit coordination, setting policies, allocating resources, and monitoring progress. These steering committees can also promote organizational recognition and secure funding commitments for the telecommunications function.
User documentation is an important tool for communication. It enhances the value of an application to the user and, in turn, improves user satisfaction. The impact of user documentation has become even more considerable as interest in end-user computing has surged. Yet there are very few practical managerial tools for measuring the quality of user documentation. This article examines two alternative forms of an instrument for measuring user perceptions of the quality of user documentation. The reliability and validity of these instruments are examined. Using a survey of 618 end-users from 44 firms, the author cross-validates the instrument, conducts a factor analysis, and assesses its construct validity. Furthermore, the reliability and validity of the instrument are assessed by nature and type of application. The results indicate strong support for the instrument. With these instruments, management can more easily monitor and control the condition of their firm's user documentation.
This article contrasts traditional versus end-user computing environments and reports on the development of an instrument that merges ease-of-use and information product items to measure the satisfaction of users who directly interact with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction: content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented. Reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.
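Reliability assessments like those described in these abstracts commonly rest on an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch of that standard computation (the formula is textbook-standard; the helper name and the four-respondent data below are hypothetical, not taken from any of these studies):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha, an internal-consistency reliability coefficient
    for a summed scale.

    items: one list per scale item, each holding one score per respondent.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    where k is the number of items.
    """
    k = len(items)
    item_var_sum = sum(variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # summed score per respondent
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical data for four respondents on a three-item scale:
# perfectly consistent items drive alpha toward 1.0, weakly related
# items drive it toward 0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Values near 1.0 indicate that the items covary strongly, which is the sense in which a 12-item summed scale such as the one above can be called reliable.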